Hard attention models



Reviews: Saccader: Improving Accuracy of Hard Attention Models for Vision

Neural Information Processing Systems

This paper addresses the problem of training hard-attention mechanisms for image classification. To do so, it introduces a new hard-attention layer (called a Saccader cell) with a pretraining procedure that improves performance. More importantly, the authors show that the approach is more interpretable, requiring fewer glimpses than other methods, while outperforming similar approaches and coming close in performance to non-interpretable models such as ResNet. Originality: The proposed Saccader model is original and compares favorably to state-of-the-art work in terms of both performance and, more importantly, interpretability. Related work has been cited adequately.



Saccader: Improving Accuracy of Hard Attention Models for Vision

Elsayed, Gamaleldin; Kornblith, Simon; Le, Quoc V.

Neural Information Processing Systems

Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, their decisions are difficult to interpret. One approach that offers some level of interpretability by design is hard attention, which uses only relevant portions of the image. However, training hard attention models with only class label supervision is challenging, and hard attention has proved difficult to scale to complex datasets. Here, we propose a novel hard attention model, which we term Saccader. Key to Saccader is a pretraining step that requires only class labels and provides initial attention locations for policy gradient optimization. Our best models narrow the gap to common ImageNet baselines, achieving 75% top-1 and 91% top-5 accuracy while attending to less than one-third of the image.
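The two ingredients the abstract names — hard attention (the classifier sees only a cropped glimpse, not the whole image) and policy gradient optimization of where to look — can be illustrated with a toy NumPy sketch. This is not the Saccader architecture itself; the glimpse extractor and the softmax location policy below are hypothetical simplifications, and the update shown is a plain REINFORCE step on location logits.

```python
import numpy as np

def extract_glimpse(image, row, col, size):
    """Hard attention: crop a size x size patch starting at (row, col).
    Downstream computation uses only this region of the image."""
    return image[row:row + size, col:col + size]

def reinforce_step(logits, location_idx, reward, baseline, lr=0.1):
    """One REINFORCE update on the logits of a softmax policy over
    candidate glimpse locations.

    For a softmax policy pi, grad log pi(a) w.r.t. the logits is
    (one_hot(a) - pi); it is scaled by the advantage (reward - baseline).
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    one_hot = np.zeros_like(logits)
    one_hot[location_idx] = 1.0
    grad_log_pi = one_hot - probs
    return logits + lr * (reward - baseline) * grad_log_pi
```

In this sketch, a rewarded location (e.g. one whose glimpse led to a correct classification) has its logit pushed up relative to the others; the pretraining step described in the abstract would supply sensible initial logits so that policy gradient training does not start from random locations.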


Learning Hard Alignments with Variational Inference

Lawson, Dieterich; Chiu, Chung-Cheng; Tucker, George; Raffel, Colin; Swersky, Kevin; Jaitly, Navdeep

arXiv.org Artificial Intelligence

There has recently been significant interest in hard attention models for tasks such as object recognition, visual captioning and speech recognition. Hard attention can offer benefits over soft attention such as decreased computational cost, but training hard attention models can be difficult because of the discrete latent variables they introduce. Previous work used REINFORCE and Q-learning to approach these issues, but those methods can provide high-variance gradient estimates and be slow to train. In this paper, we tackle the problem of learning hard attention for a sequential task using variational inference methods, specifically the recently introduced VIMCO and NVIL. Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We demonstrate our method on a phoneme recognition task in clean and noisy environments and show that our method outperforms REINFORCE, with the difference being greater for a more complicated task.
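The high-variance gradient estimates the abstract attributes to REINFORCE are typically reduced with a baseline (control variate). The sketch below shows a leave-one-out multi-sample baseline in the spirit of VIMCO's control variate together with a generic score-function estimator; it is a hedged illustration, not the exact VIMCO or NVIL estimator from the paper, and the input arrays are hypothetical.

```python
import numpy as np

def leave_one_out_baselines(rewards):
    """For each of K sampled discrete latents (e.g. alignments), use the
    mean reward of the other K-1 samples as its baseline — a per-sample
    control variate in the spirit of VIMCO's multi-sample baseline."""
    rewards = np.asarray(rewards, dtype=float)
    k = rewards.shape[0]
    return (rewards.sum() - rewards) / (k - 1)

def score_function_grad(grad_log_probs, rewards, baselines):
    """REINFORCE-style estimator: average over samples i of
    (r_i - b_i) * grad log p(z_i)."""
    adv = np.asarray(rewards, dtype=float) - np.asarray(baselines, dtype=float)
    return (adv[:, None] * np.asarray(grad_log_probs, dtype=float)).mean(axis=0)
```

Subtracting a baseline leaves the estimator unbiased (the baseline is independent of each sample's own reward) while shrinking the magnitude of each term, which is the variance reduction that makes training discrete latent-variable models practical.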